466 research outputs found

    A Compromise between Neutrino Masses and Collider Signatures in the Type-II Seesaw Model

    A natural extension of the standard SU(2)_L × U(1)_Y gauge model to accommodate massive neutrinos is to introduce one Higgs triplet and three right-handed Majorana neutrinos, leading to a 6×6 neutrino mass matrix which contains three 3×3 sub-matrices M_L, M_D and M_R. We show that the three light Majorana neutrinos (i.e., the mass eigenstates of ν_e, ν_μ and ν_τ) are exactly massless in this model if and only if M_L = M_D M_R^{-1} M_D^T holds exactly. This no-go theorem implies that small but non-vanishing neutrino masses may result from a significant but incomplete cancellation between the M_L and M_D M_R^{-1} M_D^T terms in the Type-II seesaw formula, provided the three right-handed Majorana neutrinos are of O(1) TeV and experimentally detectable at the LHC. We propose three simple Type-II seesaw scenarios with the A_4 × U(1)_X flavor symmetry to interpret the observed neutrino mass spectrum and neutrino mixing pattern. Such a TeV-scale neutrino model can be tested in two complementary ways: (1) searching for possible collider signatures of lepton number violation induced by the right-handed Majorana neutrinos and doubly-charged Higgs particles; and (2) searching for possible consequences of unitarity violation of the 3×3 neutrino mixing matrix in future long-baseline neutrino oscillation experiments. Comment: RevTeX, 19 pages, no figures
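The seesaw structure described in this abstract can be written out explicitly in the standard block-matrix form, using the notation above:

```latex
% Full 6x6 mass matrix in the basis of left- and right-handed neutrino fields:
M_\nu^{6\times 6} =
\begin{pmatrix}
  M_{\rm L} & M_{\rm D} \\
  M_{\rm D}^{T} & M_{\rm R}
\end{pmatrix}

% Type-II seesaw formula for the effective light-neutrino mass matrix:
M_\nu \simeq M_{\rm L} - M_{\rm D}\, M_{\rm R}^{-1} M_{\rm D}^{T}

% No-go condition: the three light neutrinos are exactly massless
% if and only if the two terms cancel completely:
M_{\rm L} = M_{\rm D}\, M_{\rm R}^{-1} M_{\rm D}^{T}
```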

    Binary credal classification under sparsity constraints.

    Binary classification is a well known problem in statistics. Besides classical methods, several techniques such as the naive credal classifier (for categorical data) and imprecise logistic regression (for continuous data) have been proposed to handle sparse data. However, a convincing approach to the classification problem in high-dimensional settings (i.e., when the number of attributes is larger than the number of observations) is yet to be explored in the context of imprecise probability. In this article, we propose a sensitivity analysis based on a penalised logistic regression scheme that works as a binary classifier for high-dimensional cases. We use an approach based on a set of likelihood functions (i.e. an imprecise likelihood, if you like), which assigns a set of weights to the attributes, to ensure a robust selection of the important attributes whilst training the model at the same time, all in one fell swoop. We carry out a sensitivity analysis on the weights of the penalty term, resulting in a set of sparse constraints which helps to identify imprecision in the dataset.
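A minimal sketch of the penalty-weight sensitivity idea on synthetic high-dimensional data, using proximal-gradient L1-penalised logistic regression. The data, solver, and parameter values here are illustrative assumptions, not the authors' implementation: the point is only that sweeping the penalty weight yields a family of sparse attribute selections.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 40, 100                      # high-dimensional: more attributes than observations
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[:3] = [2.0, -1.5, 1.0]    # only three attributes actually matter
y = (rng.random(n) < 1 / (1 + np.exp(-X @ true_beta))).astype(float)

def soft_threshold(v, t):
    # Proximal operator of the L1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_logistic(X, y, lam, steps=2000, lr=0.01):
    """Proximal gradient descent for L1-penalised logistic regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p_hat = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (p_hat - y) / len(y)
        beta = soft_threshold(beta - lr * grad, lr * lam)
    return beta

# Sweeping the penalty weight gives a *set* of sparse attribute selections,
# in the spirit of the sensitivity analysis described above.
for lam in (0.01, 0.05, 0.2):
    beta = l1_logistic(X, y, lam)
    print(lam, np.count_nonzero(beta))
```

Larger penalty weights shrink more coefficients exactly to zero, so comparing the selected attribute sets across the sweep exposes which selections are robust to the choice of penalty.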

    Testing the paradox of enrichment along a land use gradient in a multitrophic aboveground and belowground community

    In the light of ongoing land use changes, it is important to understand how multitrophic communities perform at different land use intensities. The paradox of enrichment predicts that fertilization leads to destabilization and extinction of predator-prey systems. We tested this prediction for a land use intensity gradient from natural to highly fertilized agricultural ecosystems. We included multiple aboveground and belowground trophic levels and land use-dependent searching efficiencies of insects. To overcome logistic constraints of field experiments, we used a successfully validated simulation model to investigate plant responses to removal of herbivores and their enemies. Consistent with our predictions, instability measured by herbivore-induced plant mortality increased with increasing land use intensity. Simultaneously, the balance between herbivores and natural enemies turned increasingly towards herbivore dominance and natural enemy failure. Under natural conditions, there were more frequently significant effects of belowground herbivores and their natural enemies on plant performance, whereas there were more aboveground effects in agroecosystems. This result was partly due to the “boom-bust” behavior of the shoot herbivore population. Plant responses to herbivore or natural enemy removal were much more abrupt than the imposed smooth land use intensity gradient. This may be due to the presence of multiple trophic levels aboveground and belowground. Our model suggests that destabilization and extinction are more likely to occur in agroecosystems than in natural communities, but the shape of the relationship is nonlinear under the influence of multiple trophic interactions.
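The destabilising effect of enrichment can be reproduced in the classic Rosenzweig-MacArthur predator-prey model, shown here as a textbook illustration (not the authors' multitrophic simulation): raising the carrying capacity K pushes a stable equilibrium into large-amplitude cycles whose prey minima approach zero, which is the extinction risk the paradox of enrichment predicts. All parameter values are illustrative.

```python
import numpy as np

def simulate(K, r=1.0, a=1.0, h=1.0, e=0.5, m=0.2, dt=0.01, T=2000.0):
    """Euler integration of the Rosenzweig-MacArthur predator-prey model."""
    N, P = 1.0, 0.5
    n_steps = int(T / dt)
    prey = np.empty(n_steps)
    for i in range(n_steps):
        f = a * N / (1 + a * h * N)              # Holling type-II functional response
        N += dt * (r * N * (1 - N / K) - f * P)  # logistic prey growth minus predation
        P += dt * (e * f * P - m * P)            # predator conversion minus mortality
        N = max(N, 0.0)
        P = max(P, 0.0)
        prey[i] = N
    return prey

# Enrichment = higher carrying capacity K. At low K the system settles to a
# stable equilibrium; at high K it cycles, and the prey minimum drops towards
# zero -- the destabilization the paradox of enrichment predicts.
for K in (1.0, 5.0):
    tail = simulate(K)[-50000:]   # discard the transient, keep the last 500 time units
    print(K, tail.min())
```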

    Time spent with cats is never wasted: Lessons learned from feline acromegalic cardiomyopathy, a naturally occurring animal model of the human disease

    Background: In humans, acromegaly due to a pituitary somatotrophic adenoma is a recognized cause of increased left ventricular (LV) mass. Acromegalic cardiomyopathy is incompletely understood, and represents a major cause of morbidity and mortality. We describe the clinical, echocardiographic and histopathologic features of naturally occurring feline acromegalic cardiomyopathy, an emerging disease among domestic cats.
    Methods: Cats with confirmed hypersomatotropism (IGF-1 > 1000 ng/ml and pituitary mass; n = 67) were prospectively recruited, as were two control groups: diabetics (IGF-1 < 800 ng/ml; n = 24) and healthy cats without known endocrinopathy or cardiovascular disease (n = 16). Echocardiography was performed in all cases, including after hypersomatotropism treatment where applicable. Additionally, tissue samples from deceased cats with hypersomatotropism, hypertrophic cardiomyopathy and age-matched controls (n = 21 each) were collected and systematically histopathologically reviewed and compared.
    Results: By echocardiography, cats with hypersomatotropism had a greater maximum LV wall thickness (6.5 mm, 4.1–10.1 mm) than diabetic (5.9 mm, 4.2–9.1 mm; Mann-Whitney, p < 0.001) or control cats (5.2 mm, 4.1–6.5 mm; Mann-Whitney, p < 0.001). Left atrial diameter was also greater in cats with hypersomatotropism (16.6 mm, 13.0–29.5 mm) than in diabetic (15.4 mm, 11.2–20.3 mm; Mann-Whitney, p < 0.001) and control cats (14.0 mm, 12.6–17.4 mm; Mann-Whitney, p < 0.001). After hypophysectomy and normalization of IGF-1 concentration (n = 20), echocardiographic changes proved mostly reversible. As in humans, histopathology of the feline acromegalic heart was dominated by myocyte hypertrophy with interstitial fibrosis and minimal myofiber disarray.
    Conclusions: These results demonstrate cats could be considered a naturally occurring model of acromegalic cardiomyopathy, and as such help elucidate mechanisms driving cardiovascular remodeling in this disease.

    The Born supremacy: quantum advantage and training of an Ising Born machine

    The search for an application of near-term quantum devices is widespread. Quantum Machine Learning is touted as a potential utilisation of such devices, particularly those which are out of the reach of the simulation capabilities of classical computers. In this work, we propose a generative Quantum Machine Learning Model, called the Ising Born Machine (IBM), which we show cannot, in the worst case, and up to suitable notions of error, be simulated efficiently by a classical device. We also show this holds for all the circuit families encountered during training. In particular, we explore quantum circuit learning using non-universal circuits derived from Ising Model Hamiltonians, which are implementable on near-term quantum devices. We propose two novel training methods for the IBM by utilising the Stein Discrepancy and the Sinkhorn Divergence cost functions. We show numerically, both using a simulator within Rigetti's Forest platform and on the Aspen-1 16Q chip, that the cost functions we suggest outperform the more commonly used Maximum Mean Discrepancy (MMD) for differentiable training. We also propose an improvement to the MMD via a novel utilisation of quantum kernels which we demonstrate provides improvements over its classical counterpart. We discuss the potential of these methods to learn `hard' quantum distributions, a feat which would demonstrate the advantage of quantum over classical computers, and provide the first formal definitions for what we call `Quantum Learning Supremacy'. Finally, we propose a novel view on the area of quantum circuit compilation by using the IBM to `mimic' target quantum circuits using classical output data only.Comment: v3 : Close to journal published version - significant text structure change, split into main text & appendices. See v2 for unsplit version; v2 : Typos corrected, figures altered slightly; v1 : 68 pages, 39 Figures. Comments welcome. Implementation at https://github.com/BrianCoyle/IsingBornMachin
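The Maximum Mean Discrepancy that this abstract uses as a baseline cost compares two sets of samples through a kernel. A minimal classical sketch with a Gaussian kernel follows (the biased sample estimate of the squared MMD; this is the standard classical baseline, not the quantum-kernel variant the paper proposes, and the data is illustrative):

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between sample sets x and y.
    d = x[:, None, :] - y[None, :, :]
    return np.exp(-np.sum(d**2, axis=-1) / (2 * sigma**2))

def mmd2(x, y, sigma=1.0):
    """Biased sample estimate of the squared Maximum Mean Discrepancy."""
    return (gaussian_kernel(x, x, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean())

rng = np.random.default_rng(1)
p = rng.normal(0.0, 1.0, size=(500, 1))       # samples from the "target"
q_near = rng.normal(0.1, 1.0, size=(500, 1))  # model close to the target
q_far = rng.normal(2.0, 1.0, size=(500, 1))   # model far from the target
print(mmd2(p, q_near), mmd2(p, q_far))
```

Training a Born machine against this cost means differentiating the MMD with respect to circuit parameters; the paper's contribution is to replace this classical kernel with a quantum one and with the Stein and Sinkhorn costs.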

    Heterogeneity of fractional anisotropy and mean diffusivity measurements by in vivo diffusion tensor imaging in normal human hearts

    Background: Cardiac diffusion tensor imaging (cDTI) by cardiovascular magnetic resonance has the potential to assess microstructural changes through measures of fractional anisotropy (FA) and mean diffusivity (MD). However, normal variation in regional and transmural FA and MD is not well described. Methods: Twenty normal subjects were scanned using an optimised cDTI sequence at 3T in systole. FA and MD were quantified in 3 transmural layers and 4 regional myocardial walls. Results: FA was higher in the mesocardium (0.46 ±0.04) than the endocardium (0.40 ±0.04, p≤0.001) and epicardium (0.39 ±0.04, p≤0.001). On regional analysis, the FA in the septum was greater than in the lateral wall (0.44 ±0.03 vs 0.40 ±0.05, p = 0.04). There was a transmural gradient in MD, increasing towards the endocardium (epicardium 0.87 ±0.07 vs endocardium 0.91 ±0.08 ×10^-3 mm^2/s, p = 0.04). With the lateral wall (0.87 ±0.08 ×10^-3 mm^2/s) as the reference, the MD was higher in the anterior wall (0.92 ±0.08 ×10^-3 mm^2/s, p = 0.016) and septum (0.92 ±0.07 ×10^-3 mm^2/s, p = 0.028). Transmurally, the signal to noise ratio (SNR) was greatest in the mesocardium (14.5 ±2.5 vs endocardium 13.1 ±2.2, p<0.001; vs epicardium 12.0 ±2.4, p<0.001) and regionally in the septum (16.0 ±3.4 vs lateral wall 11.5 ±1.5, p<0.001). Transmural analysis suggested a relative reduction in the rate of change in helical angle (HA) within the mesocardium. Conclusions: In vivo FA and MD measurements in the normal human heart are heterogeneous, varying significantly transmurally and regionally. Contributors to this heterogeneity are many, complex and interactive, but include SNR, variations in cardiac microstructure, partial volume effects and strain. These data indicate that the potential clinical use of FA and MD would require measurement standardisation by myocardial region and layer, unless pathological changes substantially exceed the normal variation identified.
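The FA and MD values quoted above derive from the eigenvalues of the diffusion tensor via the standard DTI definitions. A minimal sketch, with purely illustrative eigenvalues (not measurements from the study):

```python
import numpy as np

def md_fa(eigvals):
    """Mean diffusivity and fractional anisotropy from the three
    eigenvalues of a diffusion tensor (standard DTI definitions)."""
    lam = np.asarray(eigvals, dtype=float)
    md = lam.mean()                                   # MD = (l1 + l2 + l3) / 3
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2)
                 / np.sum(lam ** 2))                  # FA in [0, 1]
    return md, fa

# Illustrative eigenvalues in units of 10^-3 mm^2/s:
md, fa = md_fa([1.4, 0.7, 0.6])
print(md, fa)   # MD ~ 0.9, FA ~ 0.45 -- the same order as the values above
```

FA is 0 for isotropic diffusion (all eigenvalues equal) and approaches 1 as diffusion becomes confined to a single direction, which is why it serves as a proxy for the coherence of myocardial fibre organisation.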

    Performance of CMS muon reconstruction in pp collision events at sqrt(s) = 7 TeV

    The performance of muon reconstruction, identification, and triggering in CMS has been studied using 40 inverse picobarns of data collected in pp collisions at sqrt(s) = 7 TeV at the LHC in 2010. A few benchmark sets of selection criteria covering a wide range of physics analysis needs have been examined. For all considered selections, the efficiency to reconstruct and identify a muon with a transverse momentum pT larger than a few GeV is above 95% over the whole region of pseudorapidity covered by the CMS muon system, abs(eta) < 2.4, while the probability to misidentify a hadron as a muon is well below 1%. The efficiency to trigger on single muons with pT above a few GeV is higher than 90% over the full eta range, and typically substantially better. The overall momentum scale is measured to a precision of 0.2% with muons from Z decays. The transverse momentum resolution varies from 1% to 6% depending on pseudorapidity for muons with pT below 100 GeV and, using cosmic rays, it is shown to be better than 10% in the central region up to pT = 1 TeV. Observed distributions of all quantities are well reproduced by the Monte Carlo simulation. Comment: Replaced with published version. Added journal reference and DOI


    Performance of the CMS Cathode Strip Chambers with Cosmic Rays

    The Cathode Strip Chambers (CSCs) constitute the primary muon tracking device in the CMS endcaps. Their performance has been evaluated using data taken during a cosmic ray run in fall 2008. Measured noise levels are low, with the number of noisy channels well below 1%. Coordinate resolutions were measured for all chamber types and fall in the range of 47 to 243 microns. The efficiencies for local charged-track triggers and for hit and segment reconstruction were measured, and are above 99%. The timing resolution per layer is approximately 5 ns.